##
Subject: 3d Skull needed.
Date: Tue, 29 Jun 1993 00:03:36 -0500 (CDT)
From: Cliff Lee <cel@tenet.edu>
Hey Imagineers,
Anyone have a PD model of a skull I can use in Imagine? Needed for a
dungeon scene under construction. I am using the PC version.
Cliff Lee
cel@tenet.edu
##
Subject: Re: Grass
Date: Tue, 29 Jun 93 00:33:48 -0600
From: kholland@chicoma.lanl.gov (AIDE Kiernan)
>Better yet, do everyone a favor and
>unsubscribe from this list. You have absolutely nothing pertinent or useful
>to say and you repeatedly show complete disregard for posting guidelines.
Hear, hear!!!!! What he said!
Sorry for quoting what was blatantly obvious the first time around, but
I just had to say I fully and completely agree with the above! And I mean it.
----------------------
No... I'm sticking to this...
It is kinda funny... This is all about Imagine, and like I am not
putting a 510 frame animation together right now with the help of Rend24
and like I don't have a 530 frame 358x240 24-bit animation on
the way (which I calculate will take 20 days just because of the complexity
of the objects). I am not a real Imagineer, but I do use the program
to make my objects and flight paths... I'm less a hands-on animator
and more a programmer.. I guess I really don't belong here, but then again
this is where some of the others who are doing the type of stuff I am doing
frequently visit... Of course they don't say much, but just the same
they exist.. I guess you could say I am "Pro-Imagine-Object Editing"
but "Anti-Imagine object rendering"... I really believe that I
could leave the computing up to a few machines which are almost
doing nothing 24 hours a day but maintaining the system.
So, I'm here for those who would like to get some REAL computing
power into their animations... My software isn't complex or impossible,
it is actually pretty simple.. Just learning to automate my
workspace and having fun at the same time.
Can't wait, can't wait, can't wait... Can't wait to get those
4 Indigos and that SGI Challenge (a 64-grand server that goes 150 MIPS).
The SU's (two of them) haven't had any luck hooking the machines
up, but I have found how much fun you can have with two networked
Indigos... Anyone ever play the DOGFIGHT??? That is a heckuva
lot of fun... Let me see.. 30+30+30+30+150 = 270 MIPS??? not bad
considering our total attendance in the CS dept. here is like 100...
small school... I'm probably going to be the only AIDE using the
SGI's... As soon as we get it hooked up, I will take orders
here on things people would like to see 270 MIPS of power render...
There isn't any limit really... all 4 Indigos each have 16megs
of memory and the Challenge has a 1.2 gig hard drive..
PS- Has anyone been able to view pictures in the 1152x812 (is that the right
resolution?) 24-bit mode on the Retina... I heard it had that mode...
Just what I was looking for.. Will probably buy it (probably, my foot...
heck a NeXT user could only wish he had that resolution... ;-) ).
Kiernan
##
Subject: Re: AT3D-Demo
Date: Tue, 29 Jun 93 01:31:07 EST
From: ad99s461@sycom.mi.org (Alex Deburie)
On Mon 28-Jun-1993 11:24pm, glewis@pcocd2.intel.com (Glenn M. Lewis - ICD ~)
said:
> ...and *possibly* credit T3DLIB for portions of his source code.
> I refuse to directly accuse anyone of copying, as that would be libel if
> I could not prove it (and cannot), but I know for a fact that if it
> happened in this instance, this would be the *third* time that
> T3DLIB was used without permission (and therefore illegally) in the
> creation of a commercial (or for-profit) product.
>
> With all due respect,
> and without malice or
> ill will toward anyone,
> Glenn Lewis
Glenn,
After reading this and doing some counting (on my fingers) of the
current shareware/commercial object converters, I felt it necessary
to speak up.
I sincerely hope you do not think that Vertex, my shareware /
commercial object editor & converter, is in any way based on your code from
T3DLIB. Actually, I wasn't even aware your code existed in the public
domain until well after I had written the initial routines.
I worked long and hard on the Imagine/Turbo Silver file format. Making
judicious use of a compare program and a hex dump program, I can
assure you the Imagine/Turbo Silver routines in Vertex are entirely
written by me, with no help from anyone.
With equal respect and without malice or ill will,
-- Alex Deburie
ad99s461@sycom.mi.org
The Art Machine
##
Subject: Apex
Date: Tue, 29 Jun 1993 03:19:53 -0500
From: Daniel Jr Murrell <djm2@Ra.MsState.Edu>
Well, I got the new Apex newsletter today. Amazing new textures! Thanks Steve,
Glenn, and the gang. But since I've pretty much converted to Real 3D now, I
can't wait to see what you do with it. How would you do it? Would you write
your own RPL handlers? (RPL is the built-in programming language of Real 3D)
Danimal
djm2@ra.msstate.edu
##
Subject: Re: AGA Imagine
Date: Tue, 29 Jun 93 02:17:17 -0600
From: kholland@chicoma.lanl.gov (AIDE Kiernan)
now according to an ancient newsletter I had received. I have Aladdin 2.1 and it
has AGA file format availability - for anims as well as brushmaps. The software ha...
----------------
Do you like it??? I was considering getting it... I'm very frugal
and I always say "I'm thinking about getting XXX", it is just that there
are so many neat toys out there, I think I will get one of each....
;-)
----------------
really can't understand Impulse's lack of support. Where's the 25th Anniversary
Star Trek object disk promised? What happened to their deal with Viewpoint for
pro objects? Why are Steve Worley & Glenn Lewis doing more for Imagine users
than the company that owns the software? I really don't want to use Imagine
----------------
Good guys and bad guys... It is not weird that someone external would master
the art of using and supporting Imagine like Glenn and Steve, what is
amazing is that we get to talk to them... Impulse doesn't want to talk
to anybody...
----------------
just for modelling but if things don't change fairly soon I think a lot of
people will do just that and rely on Interchange/PixelPro to move objects
to a better rendering program.
----------------
I doubt it... I'm starting to agree with the Imagine users that
Imagine is lower priced and well featured for what it does.. I don't
use its animation capabilities because my computer is just a wimpy
16MHz 68030 with 68881... I don't want to waste money on a 35MHz 68040,
so I just choose to do my animations on an external machine..
##
Subject: Re: Omni ham-6 anim and pics uploaded to ftp.wustl.edu
Date: Tue, 29 Jun 93 02:43:57 -0600
From: kholland@chicoma.lanl.gov (AIDE Kiernan)
I also uploaded 2 jpeg still images for you less ambitious types (it's
only 5 meg for part 1 and 10 meg for part 2, come on! 8-)
--------------
Sounds like something I am doing... Is there a trend here... ;-)
I have a 530 frame animation which I estimate will be 18 megs
when completed, but that is compressed under JPEG... decompressed
I expect it will be 20-40 megs... yuck...
The first fifty frames took up two disks... Yikes...
##
Subject: Re: Imagine to Rayshade
Date: Tue, 29 Jun 93 01:56:32 -0600
From: kholland@chicoma.lanl.gov (AIDE Kiernan)
Ok this is my first post to the list so here goes ---->
At my uni I use a DEC 3100 workstation and have been playing
about with rayshade for a while now and want to be able to design
objects in a CAD-like environment either on the DEC or on my Amiga
using either Imagine or Real 3D .......
Is there any way to convert Imagine objects to rayshade
scripts? Or better still Real 3D objects? (I LOVE CSG *8)_ )
Hope someone can help..............
Graeme Wilson
On the net as ---oooOOO Git OOOooo---
--------------------------------
This is exactly... I mean exactly what I have been doing... I
have a DECstation 5000 with Rayshade and am using
Imagine 1.1 to make objects and fly-throughs for rayshade...
I wrote some C code to do the job for me.. Can you write/read
C source?? If so, read on, you might find this stuff interesting..
I made some specially written source to do animations
in rayshade...
This is how I do my things (some think I am crazy but I
am glad to see someone else who wants to do the same things I am
doing...):
1. I make my objects in Imagine...
2. I convert the objects to rayshade using Glenn Lewis'
T3DLIB programs: tddd2ray and tddd2off.
A. I use tddd2ray to convert individual Imagine objects to
a rayshade LISTed object. (list ... stuff ... end).
The result is then filtered through my rayshadeconverter
which strips the ambient/translate lines that (Glenn listen here)
screw up Rayshade 4.0... Removing these lines corrects the
problem.
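Roughly, the stripping filter is nothing fancier than a line filter.
Here is a simplified sketch in C; the exact keyword matching is my
own illustration of the idea, not the real converter:

#include <stdio.h>
#include <string.h>

/* Sketch of the stripping filter: copy stdin to stdout, dropping any
 * line whose first word is "ambient" or "translate".  The real
 * converter is a little smarter; treat this as an illustration only. */
int main(void)
{
    char line[512];

    while (fgets(line, sizeof(line), stdin) != NULL) {
        char word[64];

        if (sscanf(line, "%63s", word) == 1 &&
            (strcmp(word, "ambient") == 0 || strcmp(word, "translate") == 0))
            continue;               /* skip the offending line */
        fputs(line, stdout);
    }
    return 0;
}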
B. Tddd2off is used on my path objects (which are just a bunch of
dots that run in the order in which I placed them in Imagine).
Tddd2off will maintain this order and will output a ".geom"
file which contains the points in the paths I made in Imagine.
(Note: I have no idea what the first line in the .geom file means,
so I skip it in my RAYFLY program...)
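If you want to get at those path points yourself, reading the .geom
file is about this much work. A sketch; the one-x-y-z-triple-per-record
layout after the header line is an assumption:

#include <stdio.h>

/* Sketch: skip the first (header) line of a .geom file on stdin, then
 * read whitespace-separated x y z points until EOF. */
int main(void)
{
    char header[256];
    double x, y, z;
    int n = 0;

    if (fgets(header, sizeof(header), stdin) == NULL)
        return 1;                       /* empty file */
    while (scanf("%lf %lf %lf", &x, &y, &z) == 3) {
        printf("point %d: %g %g %g\n", n, x, y, z);
        n++;
    }
    return 0;
}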
3. The objects and paths (.ray and .geom files) are copied to an MSDOS
floppy disk using CrossDos. Then the objects are uploaded onto
my account on the UNIX machine through an MSDOS machine with NFS on it.
4. I make the scene file so that it #includes the camera, target
positions, objects and lights that I will use. #includes make it
incredibly easy to perform complex effects that would be impossible
to implement in Imagine (like conforming an object to a 4D function)
without some serious help from one of the local wizards...
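Just to give you an idea, one frame's scene file ends up looking roughly
like this. The file names and numbers are made up ("campos.ray" is a
hypothetical per-frame file holding the eyep/lookp lines), and the
keyword details are from memory, so check the rayshade docs:

screen 358 240
fov 45
#include "campos.ray"
#include "lights.ray"
#include "objects.ray"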
5. Taking a last look at my setup, making sure everything is in place...
checking screen resolution, flight paths, etc.. I prepare to
execute my special rayshade animation program with individual
target and camera path support.
6. Last, I execute rayfly (rayshade flythrough) and put it in the background.
While I am gone, the system will keep my process running in the background forever,
at least until lightning strikes, the power company shuts off the power,
etc.. It will run until all the frames are made... after each frame
is made rayfly compresses it using JPEG 3.0 and deletes the original
RLE frame in the process. Since GetX11 supports a movie mode I have a program
called "Display" that decompresses each frame, concatenates all the frames
together into one file and tells GetX11 to view the joined RLE
file in Movie mode... (There is a rough sketch of the rayfly frame loop
after the list of tools below.) I have written practically everything to
automate the entire process:
Pack - does what any hard disk backup program does but is written
to use existing Unix commands to perform the complex tasks...
ConvertRay - strips the ambient/diffuse/translate lines from
files converted using tddd2ray.
MakeLight - converts spheres in rayshade format into lights with a specific
radius (equal to or a percentage of the sphere's radius).
MakeEntry - takes a set of rayshade text files and automatically
generates the #includes for the rayshade scene file
to include each object, and names each object...
Convert2Rend24 - renames my special file naming format to something
rend24 will accept.
Display - explained above...
DisplayRend24 - I'm lazy... if I convert to rend24 naming format,
I just use this instead of "display".
Random - randomly places objects on the X and Y axis. Used to do my
grass test..
I have a directory full of automating commands that make the process
easier... Part of the reason why I created them was in anticipation
that someone here would like to use them or get some ideas on how to
do animations using a PD rendering program.
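For the curious, the heart of rayfly is really just a loop like the
sketch below. It is very stripped down: the path points are made up
(the real thing reads them from the .geom files), the interpolation is
plain linear, and the exact rayshade and cjpeg command lines are
assumptions (cjpeg only reads Utah RLE frames if it was built with RLE
support), so adjust for your own setup:

#include <stdio.h>
#include <stdlib.h>

struct vec { double x, y, z; };

/* linear interpolation between two path points */
static struct vec lerp(struct vec a, struct vec b, double t)
{
    struct vec v;
    v.x = a.x + (b.x - a.x) * t;
    v.y = a.y + (b.y - a.y) * t;
    v.z = a.z + (b.z - a.z) * t;
    return v;
}

int main(void)
{
    struct vec cam0 = { 0, -30, 8 }, cam1 = { 40, -30, 8 };  /* made-up path */
    struct vec tgt0 = { 0, 0, 2 },  tgt1 = { 0, 0, 2 };
    int frame, frames = 530;
    char cmd[512];

    for (frame = 0; frame < frames; frame++) {
        double t = (double) frame / (frames - 1);
        struct vec cam = lerp(cam0, cam1, t);
        struct vec tgt = lerp(tgt0, tgt1, t);
        FILE *fp = fopen("campos.ray", "w");   /* #included by scene.ray */

        if (fp == NULL)
            return 1;
        fprintf(fp, "eyep %g %g %g\nlookp %g %g %g\n",
                cam.x, cam.y, cam.z, tgt.x, tgt.y, tgt.z);
        fclose(fp);

        /* render, JPEG the frame, toss the RLE; command lines are assumptions */
        sprintf(cmd, "rayshade scene.ray > frame%04d.rle && "
                     "cjpeg frame%04d.rle > frame%04d.jpg && "
                     "rm frame%04d.rle", frame, frame, frame, frame);
        if (system(cmd) != 0)
            return 1;
    }
    return 0;
}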
Coming soon... Rayfly with an added morphing feature (however it will
be limited to one object at a time). It will simply interpolate
points of two versions of the same object and store the in-between
object in a file which will be used in the animation over a period
of frames. I plan to make it possible to synchronize the morphing
points with the camera and target positions..
I don't know why I am talking about a "Coming soon", I haven't
even started writing the code, but I know it will have to work...
It will not work if you try to morph an object that is 180 degrees
different from the start to the end.... It's okay if you want
to make something look like it is turning inside out... ;-)..
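The interpolation itself is the easy part. Something like this sketch,
assuming both versions of the object have the same number of points in
the same order (reading and writing the actual object files is left out):

#include <stdio.h>

/* Sketch of the planned morph step: given two versions of the same
 * object as parallel point arrays, emit the in-between points for a
 * frame at parameter t (0.0 = start object, 1.0 = end object). */
static void morph_points(double a[][3], double b[][3], int npoints, double t)
{
    int i;

    for (i = 0; i < npoints; i++) {
        double x = a[i][0] + (b[i][0] - a[i][0]) * t;
        double y = a[i][1] + (b[i][1] - a[i][1]) * t;
        double z = a[i][2] + (b[i][2] - a[i][2]) * t;
        printf("%g %g %g\n", x, y, z);
    }
}

int main(void)
{
    /* two tiny made-up versions of the "same" object */
    double start[2][3] = { { 0, 0, 0 }, { 1, 0, 0 } };
    double end[2][3]   = { { 0, 0, 2 }, { 1, 1, 2 } };

    morph_points(start, end, 2, 0.5);   /* half-way in-between object */
    return 0;
}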
Kiernan
Have you played around with the image mapping and fractal Brownian
motion texturing?? What is a lot of fun is to set the stereo
view offset then render stereo pairs, get some Xspecs and
view... Actually I don't have Xspecs, I do it the cheap way, I
put two GetX11 windows close together and I view both frames
parallel with the paths of my eyes (like a viewmaster toy).
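If you want to roll the stereo pairs yourself instead of using the
stereo view offset, the only math is sliding the eye point sideways
along the cross product of the view direction and the up vector. A
sketch; the separation value and the positions are made up:

#include <stdio.h>
#include <math.h>

struct vec { double x, y, z; };

static struct vec cross(struct vec a, struct vec b)
{
    struct vec c;
    c.x = a.y * b.z - a.z * b.y;
    c.y = a.z * b.x - a.x * b.z;
    c.z = a.x * b.y - a.y * b.x;
    return c;
}

int main(void)
{
    struct vec eye = { 0, -30, 8 }, tgt = { 0, 0, 2 }, up = { 0, 0, 1 };
    struct vec dir = { tgt.x - eye.x, tgt.y - eye.y, tgt.z - eye.z };
    struct vec right = cross(dir, up);
    double len = sqrt(right.x * right.x + right.y * right.y + right.z * right.z);
    double sep = 0.5;                   /* eye separation, in scene units */

    /* half the separation, along the normalized "right" vector */
    right.x *= sep / (2 * len);
    right.y *= sep / (2 * len);
    right.z *= sep / (2 * len);

    printf("left  eyep %g %g %g\n", eye.x - right.x, eye.y - right.y, eye.z - right.z);
    printf("right eyep %g %g %g\n", eye.x + right.x, eye.y + right.y, eye.z + right.z);
    return 0;
}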
As soon as someone tells me how to do blurring in Rayshade, I
will implement that too...
##
Subject: When we can expect Imagine 3.0
Date: Tue, 29 Jun 93 10:26:45 BST
From: ecl6gum@sun.leeds.ac.uk
Hi,
Does anyone know of a release date (if there is one) for Imagine v3.0?
Having just bought an Amiga, I've been trying to get hold of a copy of Imagine
v2.0 here in the UK. All the mail order and local Amiga stores don't have any
copies in stock, and say that they'll have to order a copy specially from the
States. One said that the release of Imagine v3.0 was "imminent"!
I'm not too keen on buying Real 3D, but equally don't want to wait for months
for Imagine v3.0 to appear.